34. Backpropagation
Now we're ready to dig into training a neural network. For this, we'll use the method known as backpropagation. In a nutshell, backpropagation consists of:
- Doing a feedforward operation.
- Comparing the output of the model with the desired output.
- Calculating the error.
- Running the feedforward operation backwards (backpropagation) to spread the error to each of the weights.
- Using this to update the weights, and get a better model.
- Continuing this process until we have a good model (a minimal sketch of this loop appears below).
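To make that loop concrete, here is a minimal sketch in plain NumPy that trains a single sigmoid unit with gradient descent. The toy data, layer size, and learning rate are all illustrative assumptions, not anything from the videos:

```python
import numpy as np

# Sigmoid activation and its derivative (derived later in this section)
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    return sigmoid(x) * (1 - sigmoid(x))

# Toy data (assumed for illustration): inputs and binary targets for OR
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(42)
weights = rng.normal(scale=0.1, size=2)
bias = 0.0
learning_rate = 0.5

for epoch in range(1000):
    # 1. Feedforward operation
    h = X @ weights + bias      # linear combination
    output = sigmoid(h)         # predicted probabilities

    # 2-3. Compare with the desired output and calculate the error
    error = y - output

    # 4. Spread the error back to each weight; the chain rule routes it
    #    through the derivative of the sigmoid
    error_term = error * sigmoid_prime(h)

    # 5. Update the weights to get a better model
    weights += learning_rate * (X.T @ error_term) / len(X)
    bias += learning_rate * error_term.mean()

print("final predictions:", sigmoid(X @ weights + bias).round(2))
```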
It sounds more complicated than it actually is. Let's take a look in the next few videos. The first video will give us a conceptual interpretation of what backpropagation is.
Backpropagation
Backpropagation Math
The next few videos go deeper into the math. Feel free to tune out, since Keras handles this part for us quite well, as the sketch below shows. If you'd like to start training networks right away, skip ahead to the next section. But if you enjoy calculating lots of derivatives, let's dive in!
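Here is a minimal sketch of what "Keras handles it" means in practice: `compile()` chooses the loss and optimizer, and `fit()` runs the feedforward pass, backpropagation, and weight updates for us. The data, layer sizes, and optimizer choice are assumptions made up for illustration:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Input

# Toy data (assumed): 100 samples of 3 features, binary targets
X = np.random.rand(100, 3)
y = (X.sum(axis=1) > 1.5).astype(float)

model = Sequential([
    Input(shape=(3,)),
    Dense(8, activation='sigmoid'),
    Dense(1, activation='sigmoid'),
])

# fit() performs feedforward, backpropagation, and the weight
# updates on every batch -- no derivatives written by hand
model.compile(optimizer='sgd', loss='binary_crossentropy')
model.fit(X, y, epochs=10, verbose=0)
```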
Note: in the video below, at 1:24, the edges of the last layer should point to the sigmoid function, not to the bias; the video incorrectly shows them pointing to the bias.
Calculating The Gradient 1
Chain Rule
We'll need to recall the chain rule to help us calculate derivatives.
Chain Rule
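As a quick refresher, the chain rule says that the derivative of a composition of functions is the product of their derivatives. A short worked example, with functions chosen just for illustration:

```latex
\frac{\partial}{\partial x} f(g(x)) = f'\bigl(g(x)\bigr)\, g'(x)

% For example, with f(h) = h^2 and g(x) = 3x + 1:
\frac{\partial}{\partial x} (3x + 1)^2 = 2(3x + 1) \cdot 3 = 6(3x + 1)
```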
Calculating The Gradient 2
Calculation of the derivative of the sigmoid function
Recall that the sigmoid function has a beautiful derivative, which we can see in the following calculation. This will make our backpropagation step much cleaner.
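Written out, the calculation goes like this:

```latex
\sigma(x) = \frac{1}{1 + e^{-x}}

\sigma'(x) = \frac{d}{dx}\left(1 + e^{-x}\right)^{-1}
           = \frac{e^{-x}}{\left(1 + e^{-x}\right)^2}
           = \frac{1}{1 + e^{-x}} \cdot \frac{e^{-x}}{1 + e^{-x}}
           = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)
```

So once we've computed $\sigma(x)$ in the feedforward pass, its derivative comes for free, which is exactly what makes the backpropagation step so clean.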